- Mulzer, Wolfgang; Phillips, Jeff M. (Eds.). Metric spaces (X, d) are ubiquitous objects in mathematics and computer science that allow for capturing pairwise distance relationships d(x, y) between points x, y ∈ X. Because of this, it is natural to ask what useful generalizations there are of metric spaces for capturing "k-wise distance relationships" d(x_1, …, x_k) among points x_1, …, x_k ∈ X for k > 2. To that end, Gähler (Math. Nachr., 1963) (and perhaps others even earlier) defined k-metric spaces, which generalize metric spaces, and most notably generalize the triangle inequality d(x_1, x_2) ≤ d(x_1, y) + d(y, x_2) to the "simplex inequality" d(x_1, …, x_k) ≤ ∑_{i=1}^k d(x_1, …, x_{i-1}, y, x_{i+1}, …, x_k). (The definition holds for any fixed k ≥ 2, and a 2-metric space is just a (standard) metric space.) In this work, we introduce strong k-metric spaces, k-metric spaces that satisfy a topological condition stronger than the simplex inequality, which makes them "behave nicely." We also introduce coboundary k-metrics, which generalize 𝓁_p metrics (and in fact all finite metric spaces induced by norms), and minimum bounding chain k-metrics, which generalize shortest path metrics (and capture all strong k-metrics). Using these definitions, we prove analogs of a number of fundamental results about embedding finite metric spaces, including Fréchet embedding (isometric embedding into 𝓁_∞) and isometric embedding of all tree metrics into 𝓁_1. We also study relationships between families of (strong) k-metrics, and show that natural quantities, like simplex volume, are strong k-metrics. (A small numerical check of the k = 3 simplex inequality appears after this list.)
- Kumar, Amit; Ron-Zewi, Noga (Eds.). We study the Matrix Multiplication Verification Problem (MMV), where the goal is, given three n × n matrices A, B, and C as input, to decide whether AB = C. A classic randomized algorithm by Freivalds (MFCS, 1979) solves MMV in Õ(n²) time, and a longstanding challenge is to (partially) derandomize it while still running in faster than matrix multiplication time (i.e., in o(n^ω) time). To that end, we give two algorithms for MMV in the case where AB - C is sparse. Specifically, when AB - C has at most O(n^δ) non-zero entries for a constant 0 ≤ δ < 2, we give (1) a deterministic O(n^(ω-ε))-time algorithm for constant ε = ε(δ) > 0, and (2) a randomized Õ(n²)-time algorithm using δ/2 ⋅ log₂ n + O(1) random bits. The former algorithm is faster than the deterministic algorithm of Künnemann (ESA, 2018) when δ ≥ 1.056, and the latter algorithm uses fewer random bits than the algorithm of Kimbrel and Sinha (IPL, 1993), which runs in the same time and uses log₂ n + O(1) random bits (in turn fewer than Freivalds's algorithm). Our algorithms are simple and use techniques from coding theory. Let H be a parity-check matrix of a Maximum Distance Separable (MDS) code, and let G = (I | G') be a generator matrix of a (possibly different) MDS code in systematic form. Our deterministic algorithm uses fast rectangular matrix multiplication to check whether HAB = HC and H(AB)^T = HC^T, and our randomized algorithm samples a uniformly random row g' from G' and checks whether g'AB = g'C and g'(AB)^T = g'C^T. We additionally study the complexity of MMV. We first show that all algorithms in a natural class of deterministic linear algebraic algorithms for MMV (including ours) require Ω(n^ω) time. We also show a barrier to proving a super-quadratic running time lower bound for matrix multiplication (and hence MMV) under the Strong Exponential Time Hypothesis (SETH). Finally, we study relationships between natural variants and special cases of MMV (with respect to deterministic Õ(n²)-time reductions). (A sketch of the Freivalds baseline check appears after this list.)
- Hazay, Carmit; Stam, Martijn (Eds.). We study the computational problem of finding a shortest non-zero vector in a rotation of ℤⁿ, which we call ℤSVP. It has been a long-standing open problem to determine if a polynomial-time algorithm for ℤSVP exists, and there is by now a beautiful line of work showing how to solve it efficiently in certain very special cases. However, despite all of this work, the fastest known algorithm that is proven to solve ℤSVP is still simply the fastest known algorithm for solving SVP (i.e., the problem of finding shortest non-zero vectors in arbitrary lattices), which runs in 2^(n+o(n)) time. We therefore set aside the (perhaps impossible) goal of finding an efficient algorithm for ℤSVP and instead ask what else we can say about the problem. E.g., can we find any non-trivial speedup over the best known SVP algorithm? And, if ℤSVP actually is hard, then what consequences would follow? Our results are as follows.
  We show that ℤSVP is in a certain sense strictly easier than SVP on arbitrary lattices. In particular, we show how to reduce ℤSVP to an approximate version of SVP in the same dimension (in fact, even to approximate unique SVP, for any constant approximation factor). Such a reduction seems very unlikely to work for SVP itself, so we view this as a qualitative separation of ℤSVP from SVP. As a consequence of this reduction, we obtain a 2^(n/2+o(n))-time algorithm for ℤSVP, i.e., the first non-trivial speedup over the best known algorithm for SVP on general lattices. (In fact, this reduction works for a more general class of lattices: semi-stable lattices with not-too-large λ_1.)
  We show a simple public-key encryption scheme that is secure if (an appropriate variant of) ℤSVP is actually hard. Specifically, our scheme is secure if it is difficult to distinguish (in the worst case) a rotation of ℤⁿ from either a lattice with all non-zero vectors longer than √(n/log n) or a lattice with smoothing parameter significantly smaller than the smoothing parameter of ℤⁿ. The latter result has an interesting qualitative connection with reverse Minkowski theorems, which in some sense say that "ℤⁿ has the largest smoothing parameter."
  We show a distribution of bases B for rotations of ℤⁿ such that, if ℤSVP is hard for any input basis, then ℤSVP is hard on input B. This gives a satisfying theoretical resolution to the problem of sampling hard bases for ℤⁿ, which was studied by Blanks and Miller [9]. This worst-case to average-case reduction is also crucially used in the analysis of our encryption scheme. (In recent independent work that appeared as a preprint before this work, Ducas and van Woerden showed essentially the same thing for general lattices [15], and they also used this to analyze the security of a public-key encryption scheme. Similar ideas also appeared in [5, 11, 20] in different contexts.)
  We perform experiments to determine how practical basis reduction performs on bases of ℤⁿ that are generated in different ways and how heuristic sieving algorithms perform on ℤⁿ. Our basis reduction experiments complement and add to those performed by Blanks and Miller, as we work with a larger class of algorithms (i.e., larger block sizes) and study the "provably hard" distribution of bases described above. Our sieving experiments confirm that heuristic sieving algorithms perform as expected on ℤⁿ. (A toy construction of a basis of a rotation of ℤⁿ appears after this list.)
- Servedio, Rocco (Ed.). We study the complexity of lattice problems in a world where algorithms, reductions, and protocols can run in superpolynomial time, revisiting four foundational results: two worst-case to average-case reductions and two protocols. We also show a novel protocol.
  1. We prove that secret-key cryptography exists if Õ(√n)-approximate SVP is hard for 2^(εn)-time algorithms. I.e., we extend to our setting (Micciancio and Regev's improved version of) Ajtai's celebrated polynomial-time worst-case to average-case reduction from Õ(n)-approximate SVP to SIS.
  2. We prove that public-key cryptography exists if Õ(n)-approximate SVP is hard for 2^(εn)-time algorithms. This extends to our setting Regev's celebrated polynomial-time worst-case to average-case reduction from Õ(n^1.5)-approximate SVP to LWE. In fact, Regev's reduction is quantum, but ours is classical, generalizing Peikert's polynomial-time classical reduction from Õ(n^2)-approximate SVP.
  3. We show a 2^(εn)-time coAM protocol for O(1)-approximate CVP, generalizing the celebrated polynomial-time protocol for O(√(n/log n))-CVP due to Goldreich and Goldwasser. These results show complexity-theoretic barriers to extending the recent line of fine-grained hardness results for CVP and SVP to larger approximation factors. (This result also extends to arbitrary norms.)
  4. We show a 2^(εn)-time co-non-deterministic protocol for O(√(log n))-approximate SVP, generalizing the (also celebrated!) polynomial-time protocol for O(√n)-CVP due to Aharonov and Regev.
  5. We give a novel coMA protocol for O(1)-approximate CVP with a 2^(εn)-time verifier.
  All of the results described above are special cases of more general theorems that achieve time-approximation factor tradeoffs. (A toy simulation of the Goldreich-Goldwasser distinguishing idea appears after this list.)
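The following is a small numerical sketch, not taken from the k-metric paper above: it checks the k = 3 simplex inequality for triangle area (a special case of the simplex volume that the abstract identifies as a strong k-metric) on random points in the plane. The point set, sample size, and tolerance are illustrative choices.

```python
# Sketch (illustrative, not from the paper): check the k = 3 simplex inequality
#   d(x_1, x_2, x_3) <= d(y, x_2, x_3) + d(x_1, y, x_3) + d(x_1, x_2, y)
# for triangle area on random points in the plane.
import itertools
import numpy as np

rng = np.random.default_rng(0)
points = rng.standard_normal((8, 2))  # 8 random points in R^2 (arbitrary choice)

def tri_area(a, b, c):
    """Area of the triangle with vertices a, b, c (a candidate 3-metric)."""
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

for i, j, k in itertools.combinations(range(len(points)), 3):
    x1, x2, x3 = points[i], points[j], points[k]
    lhs = tri_area(x1, x2, x3)
    for y in points:
        rhs = tri_area(y, x2, x3) + tri_area(x1, y, x3) + tri_area(x1, x2, y)
        assert lhs <= rhs + 1e-9, "simplex inequality violated"

print("simplex inequality (k = 3, triangle area) holds on this sample")
```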
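The next sketch implements only the classic Freivalds check that the MMV abstract uses as its baseline; it is not the paper's MDS-code-based deterministic or low-randomness algorithm. Matrix sizes and the number of rounds are arbitrary.

```python
# Sketch of the classic Freivalds check (the O~(n^2)-time randomized baseline
# mentioned in the abstract), not of the paper's MDS-code-based algorithms.
import numpy as np

def freivalds(A, B, C, rounds=20, rng=None):
    """Return False if AB != C is detected; True means "probably AB = C".

    Each round multiplies by a random 0/1 vector, costing O(n^2) arithmetic,
    and misses a discrepancy with probability at most 1/2."""
    rng = np.random.default_rng() if rng is None else rng
    for _ in range(rounds):
        r = rng.integers(0, 2, size=C.shape[1])
        if not np.array_equal(A @ (B @ r), C @ r):
            return False  # a witness that AB != C
    return True

rng = np.random.default_rng(1)
A = rng.integers(-5, 5, (50, 50))
B = rng.integers(-5, 5, (50, 50))
C = A @ B
print(freivalds(A, B, C))  # True: AB == C
C[3, 7] += 1               # plant one error, so AB - C is 1-sparse
print(freivalds(A, B, C))  # False with overwhelming probability
```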
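This sketch is only a toy version of the kind of ℤⁿ basis generation discussed in the rotations-of-ℤⁿ abstract above; it is not the paper's "provably hard" distribution. It builds a basis B = QU of a rotation of ℤⁿ from a random rotation Q and a random unimodular integer matrix U, then checks the basic invariants (integral Gram matrix with determinant 1). The dimension, entry ranges, and tolerances are arbitrary.

```python
# Toy construction (not the paper's hard distribution): a basis of a rotation
# of Z^n is B = Q U with Q orthogonal and U unimodular (integer, det = 1).
import numpy as np

rng = np.random.default_rng(0)
n = 8  # illustrative dimension

# Random unimodular U as a product of unit-triangular integer matrices.
L = np.tril(rng.integers(-2, 3, (n, n)), -1) + np.eye(n, dtype=int)
R = np.triu(rng.integers(-2, 3, (n, n)), 1) + np.eye(n, dtype=int)
U = L @ R                                         # det(U) = 1, so U Z^n = Z^n

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal matrix
B = Q @ U                                         # columns generate Q Z^n

G = B.T @ B                                       # Gram matrix (equals U^T U)
assert np.allclose(G, np.round(G)), "Gram matrix should be integral"
assert abs(np.linalg.det(G) - 1) < 1e-4, "Gram determinant should be 1"
# The lattice {B x : x in Z^n} is an honest rotation of Z^n (shortest non-zero
# vector of length 1), even though the basis B itself looks unstructured.
print("generated a basis of a rotation of Z^n; Gram matrix integral with det 1")
```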
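Finally, a heavily simplified toy simulation of the distinguishing idea behind the Goldreich-Goldwasser-style protocols referenced in the last abstract: the verifier perturbs either a lattice point or the target by bounded noise, and an honest prover can recover the verifier's hidden bit reliably only when the target is far from the lattice. The lattice (ℤ²), the noise radius, and the example targets are illustrative choices, not the protocol's actual parameters.

```python
# Toy simulation (parameters illustrative only) of the distinguishing idea
# behind Goldreich-Goldwasser-style protocols for approximate CVP: hide a bit
# by perturbing either a lattice point or the target; an honest prover recovers
# the bit reliably only when the target is far from the lattice.
import numpy as np

rng = np.random.default_rng(0)

def dist_to_Z2(x):
    """Euclidean distance from x to the integer lattice Z^2."""
    return float(np.linalg.norm(x - np.round(x)))

def prover_success_rate(t, radius=0.2, trials=2000):
    correct = 0
    for _ in range(trials):
        sigma = rng.integers(0, 2)               # verifier's hidden bit
        while True:                              # uniform point in a disk
            r = rng.uniform(-radius, radius, 2)
            if np.linalg.norm(r) <= radius:
                break
        v = rng.integers(-5, 6, 2)               # random lattice shift
        x = sigma * t + r + v                    # verifier's challenge
        # Honest prover guesses whichever coset (Z^2 or Z^2 + t) is closer.
        guess = 0 if dist_to_Z2(x) <= dist_to_Z2(x - t) else 1
        correct += int(guess == sigma)
    return correct / trials

t_far = np.array([0.5, 0.5])     # far from Z^2 relative to the noise radius
t_close = np.array([0.01, 0.0])  # very close to Z^2
print("far target  :", prover_success_rate(t_far))    # 1.0: noise balls are disjoint
print("close target:", prover_success_rate(t_close))  # barely above 1/2
```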